Hello everyone, welcome to the Jenkins infrastructure weekly meeting. Today is the last day of October 2023. Around the virtual table we have Damien Duportal, Hervé Le Meur, Mark Waite, and Kevin Martens. Hello everyone.

So, the weekly release: 2.430 is in progress. We now have the packages and the container image tags; it is built. It may even be finished already, I haven't checked in the last 15 minutes, but it's a matter of minutes. Is there anything special about this weekly, Mark or Kevin? I haven't checked the changelog to see whether there were important changes in core.

So, there was a regression fix in the weekly that was requested to be backported to 2.426.1. That fix will be included in the backporting pull request later today, and the backporting should be finished today so that we can release a 2.426.1 release candidate tomorrow, assuming I get myself configured on the VPN. Is there anything else to add about the weekly release? No? OK. So, as soon as the container image is OK, I will deploy the weekly to our infrastructure controllers, since Stéphane is not here.

Do you have any other announcements, folks? No? Then we can start looking at the calendar for the coming weeks. Next week: a weekly release, what a surprise, 2.431 will be released next week. The next LTS will be on November 15, 2023; it will be 2.426.1, with the backport mentioned earlier, if everything goes as expected. And the release candidate for this next LTS is due tomorrow, November 1. Mark Waite is our release lead.
He already started launching a lot of BOM builds last week to prepare the release. Good work. Yes.

Let's look at the public mailing list for the security advisories. A security advisory for plugins was published last week; all of the Jenkins infrastructure was patched the same day. You can find the link in that email or on the jenkins.io site, in the security advisories archive section. Nothing else to add, and no new public plan, so we can continue. Anything else to add? No. OK.

As for the next events, as usual: I think there is a DevOps World event planned before we finish. That's correct. Yes, that's correct: DevOps World London. I believe it's in December, that's the timeframe. And we also have a Jenkins contributors summit just before FOSDEM. I think those are the next two events where you could meet the Jenkins infra members. For London we have Tim Jacomb, for sure. I won't be in London myself, and apart from Tim I don't expect other infra team members to be there. Cool. Anything else on the upcoming calendar or big events?

OK, so let's start on the operational tasks. We can open the next issue: branch protection exception on the Jenkins security repository. Thanks, Hervé; I believe you checked this with Daniel Beck, correct? Yes. The second issue is about the same area. The goal of these two issues was to make sure Daniel can publish the security content for the Jenkins security organization.
For that, I needed to make sure the Jenkins security organization was configured accordingly. OK, good, thanks for looking at that. I'm sure Daniel is happy. Yes, he gave me a heads-up. Thanks.

Thanks, Alex, for taking care of Oleg's community.jenkins.io account, because for the past two or three months, as far as I can tell, Alex has also been an administrator on community.jenkins.io. So thanks for taking care of this. We had, let's say, an issue that we don't need to discuss here; it took a bit of the milestone, but yes, there were errors around that account issue.

So, let's move to the work in progress. I'll try to order this by relative priority. The most important topic we have today is the update center migration. Since last week, Stéphane has been able to work on parallelizing the copy. This is about the update center job, a cron that runs every 3 minutes to generate the JSON and then copy it to the destinations that serve the update center. Today it's a simple process that runs quickly and efficiently because we only have one machine. The experiments, started by Hervé a few weeks ago and continued by Stéphane last week, measured how long it takes to copy the update center to the different destinations. In this case, parallelization has been really effective: we are running the copies to the different destinations at the same time, using GNU parallel, with good success. The timings look really good. So that's the update center status for parallelization: the deployment works very well with it. The remaining work in progress is the mirrorbits scans, that is, triggering the mirrorbits scans at the end of the copy.
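The parallel fan-out described above could be sketched roughly like this. This is a minimal illustration only: the destination names and the echoed command are placeholders, not the team's actual sync tooling (the transcript mentions GNU parallel; `xargs -P` shown here behaves the same way for this purpose).

```shell
#!/bin/sh
# Hypothetical sketch: fan the update-center copy out to several destinations
# in parallel instead of sequentially. "mirror-a/b/c" and the echo stand in
# for the real destinations and the real copy command.
printf '%s\n' mirror-a mirror-b mirror-c \
  | xargs -P 3 -I{} sh -c 'echo "copying update center to {}"'
```

With `-P 3`, the three copies run concurrently, so the wall-clock time is roughly that of the slowest destination rather than the sum of all three.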
We are working on the kubectl piece: the authentication flow with the credentials. To sum up, we only need a single exec command for mirrorbits, the mirrorbits CLI command, to be executed on the pod. This work was started by Stéphane, and I'm working on it now because Stéphane is off for a few days. It's heading in a good direction: we have now validated authorization and authentication. The next step is to validate the connectivity from the agents; we need to properly add this machine to the allow list so it can reach the Kubernetes controller. Finally, add the credential to the trusted.ci controller and test the script.

The second work-in-progress item is the mirrorbits ingress. That has been fixed. Now we have to rework the ingress rule so it only covers the JSON files, or the files we want served only by this service, because we reused an existing ingress that was set up to distribute many more file extensions, such as application files, package files, and so on. So Damien, what you're going to do there is narrow down what it can distribute so that it only serves the JSON files? Exactly. OK, cool. The idea is not to restrict blindly: if it's a file we know is mirrored, we send it to the mirrorbits redirector; otherwise, it falls back to the Apache service. I see. What gets direct access behaves the same as the virtual machine: it handles a redirect on its own, but that's a business rule, not a mirror-redirection rule, and not the same destination. OK.

Hervé, is there anything to add on what we see, or don't, on this topic? Yes, there's good news: Jean-Baptiste Kempf has taken ownership of the mirrorbits repository, so we'll be able to make progress.
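The ingress narrowing discussed above could look something like the following. This is a hedged sketch, not the actual manifest: the host, service name, port, and the idea of using a regex path with the ingress-nginx `use-regex` annotation are all assumptions for illustration.

```yaml
# Hypothetical sketch: route only *.json requests to the mirrorbits
# redirector; everything else keeps hitting the default (Apache) backend.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: mirrorbits-json-only
  annotations:
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  ingressClassName: nginx
  rules:
    - host: updates.jenkins.io   # placeholder host
      http:
        paths:
          - path: /.*\.json$
            pathType: ImplementationSpecific
            backend:
              service:
                name: mirrorbits  # placeholder service name
                port:
                  number: 8080    # placeholder port
```

The point of the regex path is exactly what the discussion describes: files known to be mirrored go through the redirector, while unmatched paths fall through to the existing backend.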
The creator of VLC has taken ownership of the mirrorbits repository, and he has already started on the documentation. There will be a new release. Yes, that's good news. OK. They effectively adopted mirrorbits at VideoLAN because they are very active users of it; he was on the same team as the original author. So it's not even a new maintainer; you're saying he was already a maintainer? Yes, it's only promoting a stronger role in an existing project. I think he will take it on; at least, as the work proceeds, we'll see a new release. That's good. Fine. OK, thanks, Hervé.

Hervé, back to you: what is the status of the ARM64 transition? So, I worked a lot on a Packer image. It had to be rebuilt, and it wasn't configured in the CI, so there was work needed for that. First, I set up the build in the CI; then I set up the publication as part of the CI; and then I modified the deployment to use the ARM64 image. We had updates pending on the packages of that image, which was more than 2 years old, so after testing and deploying the image, I was able, just before this meeting, to migrate, I think, to ARM64 with success. So yes, it's running, and my manual checks were successful too. The next step will be migrating cert-manager and the Datadog agents; those pull requests were in standby while I was working on the image update. There are also, I think, two more services which need an ARM64 image to be deployed, but we are close on our first items, and next we'll see if we can migrate the weekly controller to ARM64: the first Jenkins controller, without any question.
So, anything to add, Damien? I didn't have time to work on the Falco upgrade, and right now the Falco DaemonSet agents on the ARM64 nodes are crashing. It's not blocking what you are doing, and it's not blocking the next step identified here, but I believe we should work on it. I propose that we prioritize Falco over the Kubernetes 1.26 upgrade, at least on Azure; we can still do the Amazon upgrade this week. But yeah, I believe we should start by redeploying Falco. Given my availability, it needs to be announced, because the risk is that if the new Falco deployment shuts down all connections on the production cluster, we might have issues, right? The only way to test it in real-life conditions for network ports, since it interacts with the Azure API... I will try a deployment on K3s, but honestly, we need to announce this one. I mean, the impact can be big, but we can revert it very quickly, so better to announce it and do it early.

The reason I want to prioritize this over the Kubernetes 1.26 upgrade on Azure is that Kubernetes 1.26 will imply new virtual machine operating system images on all nodes. It's already crashing on the latest ARM64 nodes with 1.25, so I fear it could get worse. So better to fix Falco, or remove Falco, and then finish the upgrade on Azure. Does that make sense, and do you agree with that proposal? Yes, yes for me as well: fix the Falco crashes by upgrading it to the latest version, or remove it.

And remind me again, what does Falco provide? A runtime security system. It helps us implement rules and policies on Kubernetes, a bit like what PSPs were doing in the past. For instance, Falco ensures that we don't run images that are not allowed or that don't come from an allowed registry. You cannot use a dduportal/whatever image. Well, no, I'm lying, because I think I added that rule myself, but in theory you cannot run an arbitrary image. We could add many more policies, and there are a lot of alternatives to Falco, but right now we have that instance.
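A rule of the kind just described (flag containers whose image comes from outside an allow list) might look like this in Falco's rule syntax. This is a sketch under assumptions: the list contents, rule name, and priority are placeholders, not the team's real policy.

```yaml
# Hypothetical sketch of an "allowed registries" Falco rule.
# The repositories listed here are illustrative only.
- list: allowed_image_repositories
  items: [docker.io/library/jenkins, ghcr.io/example/agent]

- rule: Container from unexpected repository
  desc: A container was started from an image repository outside the allow list
  condition: >
    container.id != host
    and not container.image.repository in (allowed_image_repositories)
  output: >
    Unexpected image repository (image=%container.image.repository
    pod=%k8s.pod.name)
  priority: WARNING
```

In an enforcement setup, the alert can be wired to an action (or the policy engine can reject the workload); as the discussion notes, dedicated policy engines are alternatives for the same job.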
Maybe we can just remove it and continue, but the goal is rather to have it minimally but strictly working on the cluster. So it's a form of policy enforcement engine that helps us comply with good practices for what we're doing on the cluster? Exactly.

The rest of the ARM64 work, such as the virtual machines, VPN, etc., will come for later, so I propose we not work on it right now. Hervé, still okay for you to work on cert-manager, Datadog, and the upcoming services at least? If you're interested in Falco, I don't mind you taking it, but by default I don't mind taking it either. It just has to be done and I don't have a preference, so choose based on your time, availability, and motivation on that topic.

Then, the upgrade to Kubernetes 1.26. DigitalOcean was successfully upgraded last week; that went really well. I had to do the upgrade through the DigitalOcean console due to a low-level Terraform thing. I may open an issue, but it's really minor; between clicking to merge a pull request and clicking to upgrade a cluster, I don't see any problem. That has been documented on the issue, so it can be redone by someone other than me. It's really easy, and Terraform keeps working and tracking the rest, so that's not a problem.

Now the work in progress is the EKS upgrades. I've covered all the changelogs from both Amazon and vanilla Kubernetes. The only important thing for Amazon is that we will have to upgrade, or ensure a minimum version of, one of the CNI plugins. That's the plugin that takes care of the virtual network connecting all the nodes to the AWS VPC system. We need to be sure that plugin has a certain minimum version before starting the upgrade; otherwise, everything will be shut down during the whole upgrade, and that might require help from AWS support. That's written in their changelog. So that's the only important thing; the rest are usual upgrades, so no risk there.
So: the CNI plugin version has a requirement; otherwise, we are ready to go. The impact, as a reminder, since we'll need to announce it: all the plugin builds on ci.jenkins.io will run on DigitalOcean, so no user impact for those. However, during the upgrade, the BOM build won't be able to run at all. Okay. As for the exact behavior, I'm not completely sure; I need to double-check and I will write it down on the issue. But as far as I remember, since we will disable the agent cloud on ci.jenkins.io, if a build accidentally starts during the operation, it will wait in the build queue until the operation is finished and the cloud is added back. So worst case, it will just wait a bit longer without consuming resources. Good. So for me as a user, it would just be perceived as a delay, not an outright failure of the bill-of-materials build? Exactly, until the operation is finished. Great.

Is there any problem, Mark, if at the end of the operation I try to schedule a BOM build, at least the first stage, just to be sure it can schedule on the upgraded node pool? You're welcome to do it. You bet. In fact, at the end of the operation, if you have permissions on that repository, you could just ask Dependabot to scan the repository for any changes, because if there's been a recent plugin release, it will launch a very lightweight build anyway. And that was going to happen no matter what, right? True, but as far as I remember that one only exercises the same node pool as the plugin builds, while I need a BOM build to schedule on the specific node pool. Okay. Those Dependabot-triggered builds are certainly very, very lightweight; I don't remember how we made them lightweight, but I know that they are. So you are welcome to launch a build on the master branch. It would be healthy to cancel that build after 15 minutes so that it doesn't allocate all 200, or however many, parallel machines it allocates. Cool.
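The minimum-version precondition mentioned above could be checked with a small version-compare helper before kicking off the upgrade. This is a hedged sketch: the minimum "1.12.0" and the sample current value are placeholders, not the version AWS actually documents, and in practice the current tag would be read from the live `aws-node` DaemonSet.

```shell
#!/bin/sh
# Hypothetical sketch: guard an EKS upgrade on a minimum VPC CNI version.
cni_at_least() {
  # succeeds when version $1 >= minimum $2, using version-aware sort
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# In practice the current tag would come from the cluster, e.g.:
#   kubectl -n kube-system get daemonset aws-node \
#     -o jsonpath='{.spec.template.spec.containers[0].image}'
current="1.13.4"   # placeholder value for illustration
if cni_at_least "$current" "1.12.0"; then
  echo "CNI $current is recent enough: safe to start the upgrade"
else
  echo "CNI $current is too old: upgrade the VPC CNI add-on first" >&2
fi
```

Running the check first avoids the failure mode described in the discussion, where nodes lose networking for the whole duration of the upgrade.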
That leaves AKS to do, but it needs the Falco upgrade first, so most probably next week. Right. Is there any question about the Kubernetes 1.26 upgrade? As a reminder, DigitalOcean starts deprecating our previous Kubernetes version tomorrow, so we are okay. The next one will be Azure, in December, and Amazon is next year. So no more urgency on this one, but better to finish it, of course.

I don't remember if you and I were able to sync, or if you were able to work on Matomo. Neither. Okay, nothing done, not enough time.

I know the Packer goss work progressed. Yes: the Windows Packer images are now using goss. And we had an external contribution: it added support for Playwright, a framework installed with Node.js to run headless browsers, which is now available on Linux. The good thing is that the contribution included a goss sanity check for that new application. So, almost test-driven development; at least non-regression. Did that addition add something that uses Playwright, or is it only available as a tool? Oh no, it's added inside the ci.jenkins.io Linux workers, whether container or virtual machine, and the contributor added the goss sanity check so that every build of that Linux template checks for Playwright and its version. Okay, but it's not that we have active consumers of Playwright yet? We already had: it was installed inline by one project, I don't remember which one, and now it has been moved to be a facility for everyone instead of being installed on each build. Got it; so instead of downloading the tool each time, it's already there. That's the idea. And it's kept up to date: it was already updated two days after the initial release. Nice work, and thanks for the contribution. Now Stéphane is working on covering all the Linux tests with goss.
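A goss sanity check of the kind described might look like this. It is a sketch under assumptions: the exact command, matcher, and resource naming are illustrative, not the contributed check itself.

```yaml
# Hypothetical sketch: assert that Playwright is installed on the image
# and reports a version, so every template build catches a missing or
# broken install.
command:
  playwright-version:
    exec: "playwright --version"
    exit-status: 0
    stdout:
      - "Version"     # illustrative match against the version banner
    timeout: 10000    # milliseconds
```

Run with `goss validate` during the image build, this acts as the non-regression gate mentioned in the discussion: a template that loses the tool, or ships a broken one, fails its build instead of failing user jobs later.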
And then the next step will be to add a common goss harness. The idea is that most of the commands are invoked the same way on Windows and Linux. For instance, whether you are on a Windows or Linux container or virtual machine, the expectation for what we provide to developers on ci.jenkins.io is that `java -version` should return JDK 11, and the result should be the same whatever the operating system. So that will be a common framework, and we will keep the current Linux- and Windows-specific tests for the cases where there is operating-system specificity, such as calling other JDK installations directly, which requires absolute paths, so you have to use different commands on Windows and Linux.

Next, JDK 21. The work everybody did on the official Jenkins Docker images has been reused, along with the work from Stéphane and Bruno Verachten. So now the last mile is switching the JDK tool on our controllers to the latest GA JDK 21; the Packer images and the Java version on every other piece of the infrastructure are already using the latest JDK 21, that is, 21.0.1. Is that okay? Absolutely; the JDK tool and the trusted.ci build agents, my bad. Just a note: the s390x agents keep using the EA version, for the tool at least. So thanks for the work you did on the official Jenkins images; we copied and pasted your work and it just worked out of the box. Thanks for that effort.

Is there any question on that topic? Sorry? It was more of a platform subject: using binaries instead of the Temurin image. Yep, exactly. And Bruno answered my question on s390x: as far as he can tell, we have nothing to fear; it's not that they will never support s390x on Java 21. Eventually we'll get a full release; right now, using early access is better than nothing. Exactly. It's just that they want to run the full quality-assurance framework, and they didn't have the time and resources to do it.
That's why they provide a build with only part of the testing framework run, as-is. They haven't decided yet; maybe they never will, maybe they will, but it's really hard to assess the probability here. Yeah. Good. I read some prerequisites in some issues on their repository: some of the lesser architectures have unstable builds, so that also contributes to their hesitation. Thank you. Thanks.

On the topics we didn't have time to spend on: since Stéphane is out, I believe no work could have been done on spinning up the docker-image library to create and push tags at the same time for both GitHub and Docker. I propose moving this one to the backlog for the next milestone, until Stéphane has a full week and can spend time on it, or someone else does. Is that okay for everyone? Yes; it's important but not a blocker. Back to the backlog for now.

Mirror status link from get.jenkins.io. I've started to check this topic. I'm still in analysis, because I don't want to go too deep, but the status.html file is not copied to the mirrors. I'm not sure if it's just a few mirrors or all of them, and I need to check whether it's part of what is copied during the packaging deployment. Not sure it should be; we have to decide. Is there a reason to copy it? If there is, then we either need to make status.html non-mirrored and served by the Apache of get.jenkins.io, or we need the mirrors to copy it. That depends; it's also related to whether we actually need it, or should just update the link. And there is also something fishy here, but that's not the first topic.

We need to check the requirements for having the archive on jenkins.io as the fallback. That is most probably DigitalOcean-related. Because right now we have OSUOSL as the fallback, which means that if no mirror serves a file, it falls back by default to the OSUOSL servers. And the OSUOSL servers do not provide the whole history, while the archive does.
So that's why I believe the archive on jenkins.io should be okay for that role, instead of spinning up another service to play that fallback role. I believe the archive already does it, so better to consolidate resources and increase its capacity if needed, or move it somewhere else. But I believe we should use it. Any question? All good. Okay, thanks.

I still need to contact the Belnet admins; I drafted the email but forgot to send it last milestone. I still believe the issue is on the end-user side and not on Belnet, but I need to check with them. And if they say it's the end user, then I will let them discuss it between themselves, it won't be our problem, and I will re-enable Belnet.

Oh, I forgot, that one is important, let me move it; keeping it here. Can you give us a heads-up on it? I will plan it, I think this week, Thursday. I've got the backup; I still have to check it. It should work, but I need to check if I can restore the root folder files, which are the only ones possibly affected by the deletion flag. And then we will be able to process the request. Okay. The dismaying thing there is that this means many of the extension entries will correctly be deleted. That's a good thing, but it will amplify the fact that our current extension indexer is broken and needs to be rewritten. It's unfortunate that that's what it's going to do: we're going to get more people saying, hey, where did the list my extensions were in go? Sorry, our extension indexer is broken and we haven't been able to take the time to rewrite it yet. Whoopsie. And sorry, I'll be right back. I think that's all for that topic; so, planned for Thursday. Cool.

Next topic: the support plan for JDK versions. JDK 19 is cleaned up; I need one last search to see whether I have some leftovers. I usually wait 24 hours for the GitHub search index
to stop showing things that have already been removed. And then we have the 2+2+2 plan. So we still need the documentation update, to point to the JEP, whatever state it ends up in. I saw some activity yesterday between Mark and Basil on that topic, so it's not there yet. However, does it feel right to everyone that we consider this topic finished once we have updated the documentation to point to the JEP, and update it again later? At least we would have a reference for people who want to see what the policy is, and we already know what the policy will be for the infra. I propose that if we need additional elements on the infra part, that should be part of the JEP implementation work. Is that okay for everyone?

And that aligns with what Basil suggested. Basil's recommendation was: he specifically said he's not ready to approve the JEP unless it has checklists of what it means to add a new Jenkins version, recommend a new Jenkins and/or Java version, and remove a Java version. And after looking through the things we did for Java 21, he's absolutely right: we need a checklist. There are lots of things that have to happen: changes in documentation, changes in infra, changes in container images, changes in build tools, all sorts of places. So a checklist is a really good thing. Good point. So let's wait at least for the infra checklist to be written in the JEP before closing that topic. Is that okay? Right. And then we can contribute, because I've just started the process of creating the "add" checklist using Basil's original template. There will be lots to do there, because I'm reading the old completed tickets for Java 21 and adding checklist items: oh yeah, we did that, didn't we? We did that. Any question, objection, or need for clarification on this topic? Cool. A few new items to add to the upcoming milestone.
Alex opened an issue: merging core pull requests does not cancel their builds on ci.jenkins.io. That's a setting we have applied on the GitHub organization scanning for all plugins, as he mentioned in the issue, so it's a setting to apply on ci.jenkins.io as well. I mean, that's a good point; I thought it was already the case. If no one objects, I'm taking this action, to be done later today. And of course, just a long-term reminder, you know, the unicorn world with rainbows, in case it's not clear: Job-DSL-ization of ci.jenkins.io would avoid this kind of configuration drift, or at least it would make it easy for people to send a pull request for it. Yes. A question, maybe off topic: would using a merge queue help here, or has it been studied for this? That won't solve this problem. It will help on many other topics, that's true, but it has nothing to do with this problem. Not this one. Yeah, okay.

I believe you started to discuss with Chris earlier today about the other topic: starting a new repo for the Jenkins contributor spotlight feature. I haven't read the thread yet. Yes, he was asking how to migrate his repo for the new contributor spotlight feature to the Jenkins organization. I told him he could do it himself, since he belongs to the organization as a member, and I asked him to open a helpdesk issue for his requirement. As a note, I've been wondering whether it is meant to replace stories; I think that's important. I'm going to open an issue for the repo permissions and maybe later for the rest of the requirements. Okay. I think the repository is about the last thing we will have to think about: first we have to define the website and the deployment, and then we will shift left down to the repository. That's what really matters; I don't mind having a repository somewhere with permissions, but that's a detail here. Your question was really good:
is it planned to replace stories.jenkins.io, the website, entirely with the content of the new website? Yeah. So, Damien, could you open stories.jenkins.io so that everybody sees what it is? So, this is today a collection of stories about users of Jenkins, while the contributor spotlight is about people who are contributing to Jenkins. And Chris has done, Chris, thanks, with help from Kevin and from Christina Pizzagalli, a new design for a page that focuses on these contributors. But my assumption was that it's not replacing this; it would rather live beside it somehow. So, you know, since Kevin's here, I was assuming it would be contributors.jenkins.io rather than stories.jenkins.io, because stories is already used for user stories. But if Chris's intent is to replace it while keeping the content that's on stories still available, with the contributor spotlight also available, that's great as well; then we could just use the stories repository or whatever.

Yeah, I don't think the idea is to replace any content at this point in time; it's simply to add the contributor spotlight stuff. We were talking, and I've been commenting on the thread, but essentially we're looking at either having it be a separate page within stories.jenkins.io, kind of like how the blog has its pages and its posts and everything, or, if it's easier, potentially having it be a different domain like contributors.jenkins.io, as Mark suggested or put out there. And then my thought was: well, if we extend the stories definition to include that content, that gets more viewers and interaction with the stories.jenkins.io page in the first place. So that provides the content and potentially gets more interaction, more visitors, and other exploration on it.
Whereas the contributors domain would spotlight the point of what we're doing, which is more in line with the idea, but not necessarily a better option for rendering the page, providing it, deploying it, things like that. So yeah, it's not intended to replace stories.jenkins.io in any way; it's supposed to either extend it or become its own thing.

Okay. So, if it's being added under the stories.jenkins.io URL, then it would seem logical to me that its content would go under the stories repository. But then, I don't know if the stories repository is Gatsby-generated or something else; that's a different problem for Chris to worry about, and I don't recall how stories is generated. Oh, it is Gatsby. Okay. And then it may be that, if Chris and others can figure out how to make the design of the contributor page fit and navigate well with the stories site, all the better, that's wonderful. But that's different, I think, from the separate repository he currently has; or we need a way, on a single top-level domain, to use two GitHub repositories to contribute to it, which is also fine. Yeah, go ahead.

I would suggest, at first sight, getting this new website running with our own contributors, to get a sense of what it looks like and what a contributor page is, and then see if and how it can be integrated into stories, and whether a second repo is needed or it can be included in the stories one. That's how I would do it. And for me, that would be fine as well; I just don't want to lose the content of stories. I like what you're saying. I think if we get a first visual of it, although we've actually already got that first visual in a prototype site, if I remember correctly, Kevin; he's already deploying to the preview site.
Yeah, and I just included that link in the chat, so it is there. It's fully available, it's clickable, and everything leads to the other parts of the site. So we do have a live example of what has already been designed. Okay.

contributors.jenkins.io; so, that's the main question. I would say that deploying on the same website, and by website I mean a constant domain name with some relative path eventually added, quickly becomes a nightmare; it is not sustainable, and never has been over the past 25 years. So I don't think we should try that kind of thing. However, either, like Mark said, you work on integrating it inside the stories.jenkins.io web code, which is already Gatsby, as Mark proposed; or, with the solution you mentioned of a secondary website, we can have a redirection. For instance, on stories.jenkins.io you could add a top-level "contributor stories" link that points to /contributors, which triggers a redirect to the contributors.jenkins.io website. That makes two different contribution life cycles, but we still have a redirection between the two, and using the web components you could have a unified design across both if needed. Does that make sense?

Yeah. So I think what you're describing, for me, would be: today, in the web component, we have a "success stories" clickable link in the top bar, and we would add "contributor stories" as an additional clickable link in that top bar. I don't think we're anywhere near overflowing the width of the top bar, so that seems reasonable to me. And I just looked at the preview website: it's already integrating the Jenkins components, not on the homepage, but on the other pages. So yeah. And I also think we've been working under the idea of adding it as a separate link in the components.
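The redirect idea discussed above could be expressed as a tiny ingress rule. This is purely a sketch under assumptions: the host names come from the discussion, and the `temporal-redirect` annotation (an ingress-nginx feature) plus the service name are illustrative, not the project's actual configuration.

```yaml
# Hypothetical sketch: send stories.jenkins.io/contributors to a standalone
# contributors site via a 302 redirect handled by ingress-nginx.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: stories-contributors-redirect
  annotations:
    nginx.ingress.kubernetes.io/temporal-redirect: "https://contributors.jenkins.io"
spec:
  ingressClassName: nginx
  rules:
    - host: stories.jenkins.io
      http:
        paths:
          - path: /contributors
            pathType: Prefix
            backend:
              service:
                name: stories    # placeholder; redirect happens before the backend
                port:
                  number: 80
```

A temporary (302) redirect keeps the option open of later integrating the page directly into the stories site, whereas a permanent (301) redirect would get cached by browsers and crawlers.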
So Chris, in my email thread with him, is already very well aware of that and plans on submitting that ticket as well. Okay, so if it's a separate component, then it doesn't need to be on stories.jenkins.io at all; I think that's what you're saying. Okay. Yeah, I wasn't sure what would be easier and more manageable for the repo or the organization itself, so I thought having it be a separate site like contributors.jenkins.io might be easier in that sense, because, like we said, we can always redirect, we can connect it in various other ways; we have the components to help us connect everything. I just wasn't sure if that would be more work in the background, basically. Honestly, we adapt. It's just that merging isolated websites into a single website deployment tends to become a nightmare quickly. At first it's visible only to the SREs, but quickly it becomes a problem for the people contributing too, because the question "for a given URL, where do I contribute?" soon turns into a nightmare. No problem, though, with a subdomain. For instance, I'm not saying we should, I'm just saying it's really easy to have something such as contributors.stories.jenkins.io. That works with the secondary-website option, because subdomains are an absolutely different beast from stories.jenkins.io/contributors. So that's also a way to unify both websites, with stories.jenkins.io being a kind of wrapper that could send you to one or the other in the future. But that's still the idea of a secondary website, like you proposed. I think that works great; that would be a good overall solution combining both, using a subdomain instead of just a separate domain. Yeah. The user stories part isn't the concern at all; I'm thinking about what might be impacted by using a subdomain.
I don't know if it would still benefit from the standing of stories.jenkins.io. It's a detail, but good to know. Okay, so that means the work you started reviewing is absolutely in line with this, now that we have covered the high-level side. Can you remind us what you asked Chris to do? I asked him to open a help-desk ticket. Okay. Yes. And I also proposed that he put in place a specific website team to work on it. He has some permissions already, but when the team is defined, I think Kevin will be added through the team and removed as an individual. Okay. Which means the next steps are CI, CD, production; those are the main top-level components. Is that okay, do you see those top-level components? So CI is the preview website; I'm not going into details. If you want to take that topic, I don't mind. The idea is to define checklists, the same thing Mark did, with the details and the different steps, everything. Absolutely. So one ticket should be a good wrapper for all of it, is that okay? Or would you prefer a finer level of granularity? No, I meant that at the beginning I asked him for the different aspects separately, but having one top-level entry point is fine. I'll probably look at it... yes, there is one, I'll check. We can talk about it in pairing or in a follow-up, or you can define them beforehand if you have time. We can now go back to Chris and say we have an idea, we discussed it at a high level, we have the route and the direction, and now we can go down to the implementation level. Is that good for everyone? Yes. Yes, good for me. Okay. Quick check: are there any new issues, do we have others? Okay, I haven't checked yet.
Where was I? Okay, this one is about Bruno's contribution, what he can contribute. If I open the page... yes, it's about Bruno's contribution. Okay, so I believe you and I could handle it; I saw you sent him a message for that first inquiry, and I proposed a plan for the next milestone. Is that good? Yes. Okay, triage done. Next: Mirrorbits, the pod restarts. This is something Stéphane filed last week while working on Mirrorbits. Yes, we got HTTP 500 errors; that was the last occurrence for the month. There is no error message associated with it, we don't see an alert or an issue, and nobody complained. It's not good, but it's not terrible either. I proposed that it goes to the backlog, mainly because we'll probably have a Mirrorbits update soon, which should help. If someone wants to debug it, fine, but I don't think it's a blocker right now; we would probably need to enable more extensive logging. The chart has been updated to the new Mirrorbits parent, but that requires switching the instance to the new chart. Did you see other elements on your side? Somewhat related: looking at the code, I saw that one of the maintainers recommended something in there, the hash loop backup, but at this point it's not important for us. Yes. Why not note it so we can look at it later? But I don't think it changes anything else; it's an accepted problem for now, because the pod is not OOM-killed. It's just that the pod restarted, the service returns to normal, and we have no error logs, so it's still difficult to investigate.
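The distinction made here, an OOM kill versus a plain restart, can be read from the container's `lastState` in the pod status (as returned by `kubectl get pod -o json`). Below is a minimal Python sketch of that check; the pod name and the sample status are hypothetical and stand in for real `kubectl` output, which has many more fields:

```python
import json


def last_termination_reason(pod_json: str) -> list:
    """Return (container, reason) pairs for containers whose previous run
    ended, e.g. 'OOMKilled' vs 'Error' (a plain crash/restart)."""
    pod = json.loads(pod_json)
    reasons = []
    for status in pod.get("status", {}).get("containerStatuses", []):
        terminated = status.get("lastState", {}).get("terminated")
        if terminated:
            reasons.append((status["name"], terminated.get("reason", "unknown")))
    return reasons


# Sample status, shaped like 'kubectl get pod <name> -o json' output
# (hypothetical container name, trimmed to the relevant fields).
sample = json.dumps({
    "status": {
        "containerStatuses": [
            {"name": "mirrorbits",
             "restartCount": 3,
             "lastState": {"terminated": {"reason": "Error", "exitCode": 2}}}
        ]
    }
})
print(last_termination_reason(sample))  # [('mirrorbits', 'Error')]
```

A reason of `OOMKilled` would point at memory limits; `Error` with no logs matches the situation described above, where only more verbose logging would help.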
We'll see whether the new Mirrorbits, built with a more modern Go version with its improved GC, memory management and resource handling, improves the situation here without us changing anything, because the Go version used to build the current Mirrorbits dates from 2018. Yes. So I propose we move it to the backlog; is that okay for you? And the last one. It was added by Stéphane; it could have been part of the cluster update work, but isn't yet. Right now we install kubectl on the trusted agents with the same version as our cluster. Once we upgrade that cluster to Kubernetes 1.26, we want kubectl to be updated to the matching version. Spoiler alert: this will actually apply to 1.27, because Stéphane already bumped kubectl to 1.26 in advance, so we're already using kubectl 1.26, but we still want to automate the update. It's an easy one for anyone who wants to play with updatecli: you have to find the version in a text file in a public repository and update a file in another repository. It's a pure updatecli task, and it should be doable. So if someone is interested, they can start; it will be ready for 1.27. Absolutely. I love it when a plan comes together. I wonder if we should mark it as one of those good first issues. Okay. Why not? I'm not sure what to say. Good first issue, and backlog. Okay. Those were the last two issues. Is there anything else you want to raise, or are we okay to close the meeting? Yes. Cool. I've stopped the screen share and stopped the recording, so bye bye everyone, thanks for watching.
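The kubectl bump described above is the classic updatecli pattern: read the desired version from a file in one repository, then rewrite a pinned version in another. A hypothetical manifest sketch follows; the repository names, file paths, and the version-file layout are illustrative assumptions, not the team's actual configuration:

```yaml
# Hypothetical updatecli manifest: keep the kubectl version installed on
# the agents aligned with the Kubernetes version recorded elsewhere.
name: Bump kubectl to match the cluster version

sources:
  clusterVersion:
    kind: file
    spec:
      # Plain-text file in a public repository recording the cluster
      # version, e.g. "1.26" (illustrative URL).
      file: https://raw.githubusercontent.com/example-org/cluster-config/main/kubernetes-version.txt

targets:
  agentKubectl:
    kind: file
    sourceid: clusterVersion
    spec:
      # Pinned kubectl version in the agents repository (illustrative path).
      file: provisioning/kubectl-version.txt
```

Running `updatecli apply` against such a manifest would rewrite the target file whenever the source version changes, which is why this makes a self-contained good first issue.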